
feat: Anthropic distill support (llm.api=anthropic-messages)#399

Closed
Coke1120 wants to merge 5 commits into CortexReach:master from Coke1120:feat/anthropic-distill

Conversation


@Coke1120 Coke1120 commented Mar 29, 2026

Summary

  • Add createAnthropicApiKeyClient to src/llm-client.ts supporting Anthropic /v1/messages API via llm.api=anthropic-messages
  • Add llm.api and llm.anthropicVersion config fields to index.ts, cli.ts, and openclaw.plugin.json
  • Add examples/new-session-distill/ with a worker demonstrating Anthropic-based lesson extraction
  • Add test/llm-api-key-client.test.mjs and update plugin-manifest-regression to assert new schema fields
  • Document Anthropic config and Bitwarden secret refs in README.md
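For reference, a config fragment enabling the new Anthropic path might look like the sketch below. Only `llm.api` and `llm.anthropicVersion` are confirmed by this PR; the surrounding nesting and the `apiKey` secret-ref field are assumptions (the README changes mention Bitwarden secret refs, but the exact key name and URI scheme are guesses), and `"2023-06-01"` is the stable Anthropic API version string.

```json
{
  "llm": {
    "api": "anthropic-messages",
    "anthropicVersion": "2023-06-01",
    "apiKey": "bitwarden://item/anthropic-api-key"
  }
}
```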

Merge order

Must merge #398 first. This PR imports resolveSecretValue from src/secret-resolver.ts which is introduced in #398. It will not compile standalone.

Test plan

  • node --test test/llm-api-key-client.test.mjs — verify Anthropic client key handling
  • node test/plugin-manifest-regression.mjs — verify manifest schema declares llm.api and llm.anthropicVersion
  • npm test — full test suite passes
  • Smoke-test distill via examples/new-session-distill/ with a real session (requires live Anthropic API key)

Split from #349.

🤖 Generated with Claude Code

@AliceLJY
Collaborator

Anthropic client implementation looks good — endpoint normalization, content block parsing, and JSON repair all handled correctly.
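(For context: "content block parsing" here refers to flattening the `content` array that Anthropic's `/v1/messages` endpoint returns into plain text. A minimal sketch of that step, assuming the standard Anthropic response shape; `extractText` is a hypothetical helper name, not the PR's actual function:)

```typescript
// Anthropic /v1/messages responses carry content as an array of typed
// blocks, e.g. [{ type: "text", text: "..." }, { type: "tool_use", ... }].
interface ContentBlock {
  type: string;
  text?: string;
}

// Concatenate the text blocks, ignoring non-text blocks such as tool_use.
function extractText(blocks: ContentBlock[]): string {
  return blocks
    .filter((b) => b.type === "text" && typeof b.text === "string")
    .map((b) => b.text)
    .join("");
}
```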

This PR includes all changes from #398 (Bitwarden secret resolver) — stacked PRs. #398 needs to merge first, then this PR's diff will be just the Anthropic additions. Will review/approve once #398 lands.


@AliceLJY AliceLJY left a comment


Thanks for the Anthropic distill support! Two things to sort out before merging:

  1. Merge order dependency: This PR references resolveSecretValue from secret-resolver.ts, which is introduced in #398. If #398 hasn't merged yet, this PR won't compile standalone. Please confirm the intended merge order, or rebase on top of #398 once it's in.

  2. Test checklist: The PR body has several test items still unchecked (test/llm-api-key-client.test.mjs etc.). Could you confirm all tests are passing and update the checklist?

Once those are clear, happy to approve!

@Coke1120 Coke1120 force-pushed the feat/anthropic-distill branch from d44b00f to bcc1567 on March 31, 2026 at 10:33
@Coke1120 Coke1120 requested a review from AliceLJY on March 31, 2026 at 10:36

@AliceLJY AliceLJY left a comment


LGTM — Anthropic distill implementation is clean, test coverage solid.

⚠️ Merge order: This PR depends on #398 (Bitwarden secret resolver) for resolveSecretValue. Merge #398 first, then this one.

Assigning to @rwmjhb for merge (after #398).

Coke1120 and others added 4 commits on April 2, 2026 at 02:49
Add createAnthropicApiKeyClient to src/llm-client.ts supporting the
Anthropic /v1/messages API format via llm.api=anthropic-messages.
Add llm.api and llm.anthropicVersion config fields to index.ts, cli.ts,
and openclaw.plugin.json. Add examples/new-session-distill/ with a
worker demonstrating Anthropic-based lesson extraction. Add
test/llm-api-key-client.test.mjs and update plugin-manifest-regression
to assert the new schema fields.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With async register() now awaited, selfImprovement defaults to enabled
and registers command:new before the sessionMemory assertion runs.
Explicitly disable it in the base test config to isolate the assertion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…fest test configs

Lost during rebase onto master. selfImprovement now defaults to enabled,
so these test configs need it explicitly disabled to isolate command:new assertions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Coke1120 Coke1120 force-pushed the feat/anthropic-distill branch from bcc1567 to 7e82a1b on April 1, 2026 at 18:51
# Conflicts:
#	test/plugin-manifest-regression.mjs

app3apps commented Apr 3, 2026

I think this version still has one issue that should be fixed before merging, in the Anthropic branch of src/llm-client.ts.

It currently does:

  • const jsonStr = extractJsonFromResponse(text)
  • after JSON.parse(jsonStr) fails,
  • the fallback does const repaired = repairCommonJson(jsonStr) as T; return repaired; directly

The problem is that repairCommonJson() returns the repaired JSON string, not a parsed object. So whenever Anthropic returns JSON that is repairable but not strictly valid, the raw string is returned as T and the caller gets the wrong type. The OpenAI / OAuth branches both do JSON.parse(repairedJsonStr) here, and the Anthropic branch should stay consistent with them.

This branch is also missing the if (!jsonStr) guard that the other two paths have, so lastError can be inaccurate for non-JSON responses as well.

I'd suggest aligning it with the existing OpenAI / OAuth logic:

  1. Check !jsonStr first and set a clear error;
  2. Change the fallback to JSON.parse(repairCommonJson(jsonStr));
  3. Add an Anthropic malformed-JSON test so this regression can't reappear.
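The suggested fix can be sketched as below. extractJsonFromResponse and repairCommonJson are the names from the discussion, but their bodies here are minimal stand-ins for illustration, not the repo's real implementations:

```typescript
// Stand-in: grab the outermost {...} span from a model response.
function extractJsonFromResponse(text: string): string | null {
  const match = text.match(/\{[\s\S]*\}/);
  return match ? match[0] : null;
}

// Stand-in: repair one common defect (trailing commas). Returns a STRING.
function repairCommonJson(s: string): string {
  return s.replace(/,\s*([}\]])/g, "$1");
}

function parseModelJson<T>(text: string): T {
  const jsonStr = extractJsonFromResponse(text);
  if (!jsonStr) {
    // Guard first, matching the OpenAI / OAuth paths, so the error is accurate.
    throw new Error("no JSON object found in model response");
  }
  try {
    return JSON.parse(jsonStr) as T;
  } catch {
    // Parse the repaired string -- never return it directly as T.
    return JSON.parse(repairCommonJson(jsonStr)) as T;
  }
}
```

With this shape, a repairable-but-invalid payload like `{"a": 1,}` comes back as a parsed object, and a response with no JSON at all fails with an explicit error instead of a misleading lastError.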

@rwmjhb rwmjhb closed this Apr 5, 2026
